Group activity recognition based on partitioned attention mechanism and interactive position relationship
Bo LIU, Linbo QING, Zhengyong WANG, Mei LIU, Xue JIANG
Journal of Computer Applications 2022, 42(7): 2052-2057. DOI: 10.11772/j.issn.1001-9081.2021060904

Group activity recognition is a challenging task in complex scenes, as it involves both the interactions and the relative spatial positions of a group of people in the scene. Existing group activity recognition methods either lack fine-grained design or fail to take full advantage of the interactive features among individuals. Therefore, a network framework based on a partitioned attention mechanism and interactive position relationships was proposed, which further considered the semantic features of individual limbs and explored the relationship between the similarity of interaction features and the consistency of behaviors among individuals. Firstly, the original video sequences and optical flow image sequences were used as the input of the network, and a partitioned attention feature module was introduced to refine the limb motion features of individuals. Secondly, the spatial positions and interactive distances were taken as individual interaction features. Finally, the individual motion features and spatial position relation features were fused as the node features of an undirected graph of the group scene, and a Graph Convolutional Network (GCN) was adopted to further capture the activity interactions in the global scene, thereby recognizing the group activity. Experimental results show that this framework achieves 92.8% and 97.7% recognition accuracy on two group activity recognition datasets, CAD (Collective Activity Dataset) and CAE (Collective Activity Extended Dataset). Compared with Actor Relationship Graph (ARG) and Confidence Energy Recurrent Network (CERN) on the CAD dataset, the framework improves recognition accuracy by 1.8 and 5.6 percentage points respectively. Meanwhile, the ablation experiment results show that the proposed algorithm achieves better recognition performance.
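The partitioned attention feature module in the first step can be illustrated with a minimal sketch (not the authors' released code): each person's feature map is split into horizontal body partitions and a lightweight channel attention is applied per partition before the map is reassembled. The partition count, the squeeze-and-excitation-style attention, and the simple additive two-stream fusion below are assumptions for illustration.

```python
# Minimal sketch of a "partitioned attention" block, assuming limb motion
# features are refined by splitting each person's feature map into
# horizontal body partitions (e.g., head / torso / legs) and attending
# to each part. Partition count and attention form are assumptions.
import torch
import torch.nn as nn

class PartitionedAttention(nn.Module):
    def __init__(self, channels: int, num_parts: int = 3):
        super().__init__()
        self.num_parts = num_parts
        # One lightweight channel-attention branch per body partition.
        self.part_attn = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1),           # squeeze: B x C x 1 x 1
                nn.Conv2d(channels, channels // 4, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // 4, channels, 1),
                nn.Sigmoid(),                      # excitation weights
            )
            for _ in range(num_parts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: B x C x H x W feature map of one cropped person.
        parts = torch.chunk(x, self.num_parts, dim=2)   # split along height
        refined = [p * attn(p) for p, attn in zip(parts, self.part_attn)]
        return torch.cat(refined, dim=2)                # reassemble the map

# Example: refine an RGB-stream and an optical-flow-stream feature map.
feat_rgb = torch.randn(8, 256, 12, 6)     # 8 persons, 256 channels
feat_flow = torch.randn(8, 256, 12, 6)
pa = PartitionedAttention(256)
fused = pa(feat_rgb) + pa(feat_flow)      # simple two-stream fusion
print(fused.shape)                        # torch.Size([8, 256, 12, 6])
```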

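The graph construction and GCN step can likewise be sketched, assuming each node fuses a person's motion feature with a positional encoding and that edge weights decay with the pairwise distance between person centers (the "interactive distance"). The Gaussian edge weighting, single graph-convolution layer, and mean-pooled scene classifier below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the scene graph and GCN step, assuming distance-based
# edge weights and a single graph-convolution layer for illustration.
import torch
import torch.nn as nn

def build_adjacency(centers: torch.Tensor, sigma: float = 0.2) -> torch.Tensor:
    # centers: N x 2 normalized (x, y) person positions in the frame.
    dist = torch.cdist(centers, centers)             # N x N pairwise distances
    adj = torch.exp(-dist ** 2 / (2 * sigma ** 2))   # closer pairs interact more
    deg = adj.sum(dim=1, keepdim=True)
    return adj / deg                                 # row-normalized adjacency

class SceneGCN(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hid_dim)        # shared node transform
        self.cls = nn.Linear(hid_dim, num_classes)

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # One graph-convolution step: aggregate neighbors, then transform.
        h = torch.relu(self.gcn(adj @ nodes))        # N x hid_dim
        return self.cls(h.mean(dim=0))               # pool persons -> scene logits

# Example: 8 persons, 256-d motion features plus 2-d positions as node input.
motion = torch.randn(8, 256)
centers = torch.rand(8, 2)
nodes = torch.cat([motion, centers], dim=1)          # fuse position into nodes
logits = SceneGCN(258, 128, num_classes=5)(nodes, build_adjacency(centers))
print(logits.shape)                                  # torch.Size([5])
```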